Promise and Peril
Promise and Peril of Collaborative Code Generation Models: Balancing Effectiveness and Memorization
In the rapidly evolving field of machine learning, training models with datasets from various locations and organizations presents significant challenges due to privacy and legal concerns. The exploration of effective collaborative training settings capable of leveraging valuable knowledge from distributed and isolated datasets is therefore increasingly crucial. This study investigates key factors that impact the effectiveness of collaborative training methods for code next-token prediction, as well as the correctness and utility of the generated code, demonstrating the promise of such methods. Additionally, we evaluate the memorization of different participants' training data across various collaborative training settings, including centralized, federated, and incremental training, highlighting their potential risks of data leakage. Our findings indicate that the size and diversity of code datasets are pivotal factors influencing the success of collaboratively trained code models. We show that federated learning achieves competitive performance compared to centralized training while offering better data protection, as evidenced by lower memorization ratios in the generated code. However, federated learning can still produce verbatim code snippets from hidden training data, potentially violating privacy or copyright. Our study further explores effectiveness and memorization patterns in incremental learning, emphasizing the sequence in which individual participant datasets are introduced. We also identify cross-organizational clones as a prevalent challenge in both centralized and federated learning scenarios. Our findings highlight the persistent risk of data leakage during inference, even when training data remains unseen. We conclude with recommendations for practitioners and researchers to optimize multi-source datasets, propelling cross-organizational collaboration forward.
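To make the memorization analysis concrete, the following is a minimal sketch (not the study's actual procedure) of one way to estimate a memorization ratio: the fraction of generated snippets that reproduce a verbatim window of tokens from any participant's training data. The tokenization, the window size, and all function names here are illustrative assumptions.

# Sketch only: estimates how often generated code reproduces verbatim spans
# of training data. Whitespace tokenization and window_size are assumptions,
# not the study's actual methodology.

def token_windows(code: str, window_size: int):
    """Yield every contiguous window of `window_size` whitespace tokens."""
    tokens = code.split()
    for i in range(len(tokens) - window_size + 1):
        yield " ".join(tokens[i:i + window_size])

def memorization_ratio(generated_snippets, training_corpora, window_size=20):
    """Return the fraction of generated snippets containing a verbatim training window.

    `training_corpora` maps a participant id to a list of training files, so the
    same check can also be run per participant to see whose data leaks.
    """
    # Index all training windows once for fast lookup.
    training_windows = {
        window
        for files in training_corpora.values()
        for code in files
        for window in token_windows(code, window_size)
    }
    leaked = sum(
        any(w in training_windows for w in token_windows(snippet, window_size))
        for snippet in generated_snippets
    )
    return leaked / len(generated_snippets) if generated_snippets else 0.0

Used this way, a lower ratio for a federated model than for a centralized one would correspond to the "lower memorization ratios" the abstract reports, while any nonzero ratio reflects the residual risk of verbatim leakage.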
The Promise and Peril of AI
In early 2023, following an international conference that included dialogue with China, the United States released a "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy," urging states to adopt sensible policies that include ensuring ultimate human control over nuclear weapons. Yet the notion of "human control" itself is hazier than it might seem. If humans authorized a future AI system to "stop an incoming nuclear attack," how much discretion should it have over how to do so? The challenge is that an AI general enough to successfully thwart such an attack could also be used for offensive purposes. We need to recognize that AI technologies are inherently dual-use.
- North America > United States (0.50)
- Europe > United Kingdom (0.05)
- Asia > China > Beijing > Beijing (0.05)
Christopher Nolan on the Promise and Peril of Technology
By the time I sat down with Christopher Nolan in his posh hotel suite not far from the White House, I guessed that he was tired of Washington, D.C. The day before, he'd toured the Oval Office and had lunch on Capitol Hill. Later that night, I'd watched him receive an award from the Federation for American Scientists, an organization that counts Robert Oppenheimer, the subject of Nolan's most recent film, among its founders. He'd endured a joke, repeated too many times by Senate Majority Leader Chuck Schumer, about the subject of his next film--"It's another biopic: Schumer." The award was sitting on an end table next to Nolan, who was dressed in brown slacks, a gray vest, and a navy suit jacket--his Anglo-formality undimmed by decades spent living in Los Angeles. "It's heavy, and glass, and good for self-defense," he said of the award, while filling his teacup.
- North America > United States > District of Columbia > Washington (0.24)
- North America > United States > California > Los Angeles County > Los Angeles (0.24)
- Asia > Middle East > Republic of Türkiye > Batman Province > Batman (0.05)
- Asia > China > Hong Kong (0.04)
- Media > Film (1.00)
- Government > Regional Government > North America Government > United States Government (0.69)
'A goldmine at our fingertips': the promise and perils of AI in Africa
In South Africa, there are drones monitoring weeds; in Mauritius, there are computers crunching health data for better outcomes for patients; and in Nairobi, surveillance systems impose a modicum of order on the chaotic traffic. The bright new future of artificial intelligence in Africa is part of the bright new future of the continent as a whole, advocates say. "One thing is clear: Africans have a goldmine at our fingertips. A rapidly growing population of 1.4 billion people, 70% under the age of 30, combined with huge growth in AI investments, creates a potent recipe … We will not sit back and wait for the rest of the world to reap our rewards," wrote Mahamudu Bawumia, the vice-president of Ghana and head of the government's economic management team, in the Guardian earlier this year. Growing alarm about the threats posed by uncontrolled innovation in artificial intelligence has prompted global leaders to hold the first ever safety summit.
- Africa > Ghana (0.55)
- Africa > Mauritius (0.26)
- Africa > Kenya > Nairobi City County > Nairobi (0.25)
- (8 more...)
- Government (1.00)
- Information Technology > Security & Privacy (0.35)
The promise and perils of using artificial intelligence to fight corruption - Nature Machine Intelligence
Corruption presents one of the biggest challenges of our time, and much hope is placed in artificial intelligence (AI) to combat it. Although the growing number of AI-based anti-corruption tools (AI-ACT) has been summarized, a critical examination of their promises and perils is lacking. Here we argue that the success of AI-ACT strongly depends on whether they are implemented top–down (by governments) or bottom–up (by citizens, non-governmental organizations or journalists). Top–down use of AI-ACT can consolidate power structures and thereby pose new corruption risks. Bottom–up use of AI-ACT has the potential to provide unprecedented means for the citizenry to keep their government and bureaucratic officials in check. We outline the societal and technical challenges that need to be overcome to harness the potential of AI to fight corruption. Despite the growing number of initiatives that employ AI to counter corruption, few studies empirically tackle the political and social consequences of embedding AI in anti-corruption efforts.
Circa Hosts EEOC for Webinar on Artificial Intelligence in Employment
Circa is excited to welcome Keith E. Sonderling, Commissioner of the U.S. Equal Employment Opportunity Commission (EEOC), at 11 a.m. Circa collaborates with industry experts to provide monthly educational webinars focused on trends in diversity, equity, and inclusion in the workplace, talent acquisition, and OFCCP compliance. One of the trends that continues to be seen in talent acquisition is redefining what top talent looks like, as well as where and how employers are sourcing that talent. Employers are looking for new technologies, and many are turning to AI not only to help with recruitment and retention but also to ensure they are making efficient and effective decisions. While AI has been around for a while, many employers still feel some fear and unease about its use because of the uncertainty of how it is applied in the marketplace today.
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Web (0.71)
Artificial Intelligence in Psychiatry Has Promise and Peril
Artificial intelligence (AI) has great potential for forensic psychiatry but can also bring moral hazard, said Richard Cockerill, MD, assistant professor of psychiatry at Northwestern University Feinberg School of Medicine in Chicago, on Saturday at the American Academy of Psychiatry and the Law annual meeting. He defined AI as computer algorithms that can be used for specific tasks. There are two types of AI, Cockerill explained. The first, "machine learning," involves having a computer use algorithms to perform tasks that were previously done only by humans. The second, "deep learning," is when the computer -- using what it has learned previously -- trains itself to improve its algorithms on its own, with little or no human supervision.
- North America > United States > Illinois > Cook County > Chicago (0.25)
- Europe > Netherlands (0.06)
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
The latest chapter in a 100-year study says AI's promises and perils are getting real
A newly published report on the state of artificial intelligence says the field has reached a turning point where attention must be paid to the everyday applications of AI technology -- and to the ways in which that technology is being abused. The report, titled "Gathering Strength, Gathering Storms," was issued today as part of the One Hundred Year Study on Artificial Intelligence, or AI100, which is envisioned as a century-long effort to track progress in AI and guide its future development. AI100 was initiated by Eric Horvitz, Microsoft's chief scientific officer, and hosted by the Stanford University Institute for Human-Centered Artificial Intelligence. The project is funded by a gift from Horvitz, a Stanford alumnus, and his wife, Mary. The project's first report, published in 2016, downplayed concerns that AI would lead to a Terminator-style rise of the machines and warned that fear and suspicion about AI would impede efforts to ensure the safety and reliability of AI technologies.
- Media > News (0.32)
- Law (0.32)
- Information Technology (0.32)
Artificial Intelligence and Start-Ups in Low- and Middle-Income Countries: Progress, Promise and Perils
Around the world, artificial intelligence (AI) is automating functions and making new services possible with breakthroughs in low-cost computing power, cloud computing services, growth in big data and advancements in machine learning and related processes. This webinar discussed the current use of AI in low- and middle-income countries (LMICs), along with trends and challenges in business models, barriers to innovation and AI's ethical and responsible use towards achieving the sustainable development goals. The study examines the current use of AI in LMICs across Sub-Saharan Africa, North Africa and South and Southeast Asia. The report mapped a sample of 450 start-ups by sector in alignment with the UN Sustainable ...
- Africa > Sub-Saharan Africa (0.38)
- Asia > Southeast Asia (0.31)
- Africa > North Africa (0.31)
- Asia > East Asia (0.11)